EES 2026

24 Jan 2026

The conference theme “Evaluation for Vibrant Democracies” highlights the vital role of evaluation in strengthening democratic values, public accountability, inclusive decision-making, and learning across complex systems. As societies face intersecting political, social, environmental, and technological challenges, evaluation plays a critical role in providing credible evidence, amplifying diverse perspectives, and supporting informed public debate.

Call B proposals should be aligned either with one of the strands selected through Call A (to be published before Call B opens) or with one of the general conference strands:

  • New Methods – Innovations in AI, digital data, and mixed methods

  • Systemic Learning – Evaluation for complexity, transformation, and systems change

  • Responsiveness – Adaptive, inclusive, and democratically grounded evaluation practice

▶ 3. Strand Selection

Accepted strands from Call A are listed on the official website and available for selection in the submission portal. Strand coordinators and the EES Conference Programme Committee will review submissions for thematic fit. Based on this review, your proposal may be reassigned to a different strand to improve coherence and overall programme balance. Submitters are expected to accept this allocation.

▶ 4. Accepted Submission Modalities/Types

EES welcomes the following individual and group submission formats:

  • Panels – 90 minutes; three to four interconnected presentations with a chair

  • Fishbowls / Birds of a Feather – 90 minutes; highly participatory dialogue formats

  • Solution-Focused Sessions – Practical, collaborative problem-solving formats

  • Meet the Evaluation – multi-perspective exploration of a complex evaluation

  • Sparking Discussions – 90 minutes; 6–8 short, provocative talks to stimulate debate

  • Individual Papers – Papers will be grouped into 90-minute sessions

  • Posters – Traditional or digital poster presentations

  • Innovative Modality – Creative, non-traditional session formats that actively engage participants and go beyond standard panels or papers

▶ 5. Submission Requirements

Each submission must be in English and include:

  • Title and session modality/type

  • Selected strand

  • Abstract:

      • Up to 500 words for sessions and panels

      • Up to 300 words for individual papers

  • Rationale and objectives

  • Presenter(s) or coordinator(s):

      • Names, affiliations, and short biographies

  • Session structure or presentation approach, including interactive or innovative elements

  • 5–10 keywords

For group submissions, one person must submit the proposal on behalf of all contributors and will act as the primary contact.

▶ 6. Submission Limits and Participation Rules

  • An individual may participate in a maximum of two scheduled programme events (sessions) in any role (presenter, moderator, or discussant).

  • An exception applies to Thematic Working Group (TWG) leaders submitting a TWG-related contribution, who may be involved in up to three events.

  • The only other exception is for individuals invited by EES to chair a session.

If an individual exceeds these limits, submissions may be rejected or removed from the programme.

▶ 7. Review Process and Criteria

All individual submissions are subject to a double-blind peer review conducted by members of the EES Conference Programme Committee and strand coordinators. Meta-review will be conducted by reviewers not involved in strand coordination.

| Criterion | Description |
| --- | --- |
| Relevance & Public Interest | Alignment with the conference theme and selected strand |
| Quality | Clarity, coherence, and rigor of the proposal |
| Innovation | Originality of ideas, approaches, or methods |
| Engagement | Level of audience interaction and participatory design |

Note: Relevance and quality will carry the greatest weight in the review process. Appropriate representation of diversity among presenters and topics (e.g., gender, geographic location, cultural context) is expected.

▶ 8. Registration and Practical Information

  • All accepted presenters and co-speakers must register for the conference and pay the applicable registration fee.

  • EES does not provide funding for presenters. All participants are responsible for their own travel, accommodation, and registration costs.

  • Once submitted, proposals cannot be edited directly. Please contact us at ees2026@kuonitumlare.com to request a change.

▶ 9. Contact & Support

  • Visit the FAQ section on the EES 2026 Conference official website.

  • If you need any assistance with submitting your proposal(s), please do not hesitate to reach out to us at ees2026@kuonitumlare.com.

  • For questions related to content or expertise, please contact the EES Conference Programme Coordinators: Marta Semplici or John LaVelle.


S01_ New Methods

Evaluation thrives when methods evolve. From established approaches to emerging tools and the transformative potential of Artificial Intelligence, this strand invites us to examine how technology and innovation can enhance rigor, relevance, and impact, which is critical to strong democracies. At the same time, it makes space to explore adaptive, inclusive, and future-oriented methods that are relevant to the transformation into, and subsequent sustainability of, better societies.

Our submission so far

Modality: Innovative Modality

Selected Strand

S01_ New Methods

Title

Where is evaluation going? A live comparison of AI interviewing and AI moderation

Abstract

This innovative session gives participants a direct, critical experience of two emerging AI-supported evaluation methods: conversational interviewing and AI-assisted moderation. Using Qualia, participants will first take part in a short live interview on a shared prompt about the future of evaluation: where the field is going, what new capacities matter most, and what risks or trade-offs evaluators should now take seriously. The system will then generate a rapid synthesis of the interviews.

The session does not treat that synthesis as a final answer. Instead, it uses it as the basis for a structured comparison. Participants will split into small groups. Some groups will discuss the topic without AI support, while others will continue the discussion with AI assistance, using the emerging synthesis as context. In plenary, we will compare the outputs of these different processes and ask what each method made easier, harder, more visible, or less visible.

This session is explicitly aligned with the New Methods strand. Its purpose is not simply to showcase a tool, but to let evaluators test and interrogate AI as a methodological component in real time. By experiencing AI interviewing, rapid synthesis, and AI-supported discussion for themselves, participants can assess questions of consistency, scale, responsiveness, transparency, and loss of nuance from direct experience rather than abstract debate.

The result is both a substantive discussion about the future of evaluation and a practical, collective reflection on what AI-supported methods can and cannot contribute to rigorous, participatory, and democratically grounded evaluation practice.

Rationale and Objectives

This proposal addresses a live methodological question for the field: if AI can now interview participants, synthesise qualitative material rapidly, and help structure group discussion, how should evaluators use these capacities without flattening meaning, weakening judgement, or displacing human deliberation? That makes it a clear fit for the New Methods strand, which focuses on innovations in AI, digital data, and mixed methods.

The session is designed as a live mini-method comparison rather than a conventional panel. Participants do not only hear claims about AI-supported evaluation methods; they experience them, compare them, and reflect on their implications. This makes the session both practical and critical.

Objectives:

  • To let participants experience AI interviewing as a method for generating qualitative evaluation data.
  • To examine what rapid AI synthesis contributes, and what it may omit or distort.
  • To compare AI-assisted and non-AI small-group deliberation on the same substantive question.
  • To identify where these methods may strengthen rigor, reach, consistency, and responsiveness, and where they may create risks for nuance, accountability, or inclusive democratic dialogue.
  • To generate grounded reflections on how AI-supported methods might be used well in future evaluation practice.

Session Structure & Presentation Approach

This is a 90-minute participatory session built as a live comparative method test rather than a standard panel.

Suggested structure:

  • 0–10 minutes: Brief framing of the session question, the New Methods strand context, and the comparison we are about to run.
  • 10–25 minutes: Participants complete a short Qualia interview on their own devices in response to the prompt, "Where is evaluation going, and what should evaluators be most alert to next?" Participants who began the interview before the session can continue from the same conversation.
  • 25–35 minutes: The AI generates a rapid synthesis of the interview material. This synthesis is presented back to the room as an interim artefact, not as a definitive conclusion.
  • 35–60 minutes: Participants split into small groups for a comparison exercise. Some groups discuss the question without AI support. Other groups continue with AI-assisted moderation, using the synthesis as context for further probing and refinement.
  • 60–80 minutes: Plenary comparison of outputs across the two modes. We compare substantive conclusions, recommendations, omissions, and differences in process.
  • 80–90 minutes: Closing reflection on what this experiment suggests about the opportunities and limits of AI-supported methods in evaluation.

Innovative and interactive elements:

  • Framing message/question: AI can, of course, produce a convincing synthesis, but what does it leave out?

Presenters / Coordinators

[To complete with names, affiliations, and short biographies.]

Keywords

AI in evaluation; qualitative methods; conversational interviewing; AI moderation; participatory methods; methodological innovation; rapid synthesis; democratic evaluation